10 research outputs found

    Perception Driven Texture Generation

    This paper investigates a novel task of generating texture images from perceptual descriptions. Previous work on texture generation has focused either on synthesis from examples or on generation from procedural models; generating textures from perceptual attributes has not been well studied yet. Meanwhile, perceptual attributes such as directionality, regularity and roughness are important factors for human observers when describing a texture. In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, requiring only random noise and user-defined perceptual attributes as input. In this model, a pretrained convolutional neural network is integrated with the adversarial framework, driving the generated textures to possess the given perceptual attributes. An important aspect of the proposed model is that changing one of the input perceptual features also changes the corresponding appearance of the generated textures. We design several experiments to validate the effectiveness of the proposed method. The results show that the proposed method can produce high-quality texture images with the desired perceptual properties. Comment: 7 pages, 4 figures, icme201
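    A minimal PyTorch sketch of this kind of attribute-conditioned adversarial generator, assuming a 3-dimensional attribute vector (directionality, regularity, roughness); the module and function names are illustrative stand-ins, not the authors' code, and the discriminator is omitted.

```python
import torch
import torch.nn as nn

class AttrConditionedGenerator(nn.Module):
    """Maps (noise, perceptual attributes) to a texture image (32x32 here)."""
    def __init__(self, noise_dim=100, attr_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_dim + attr_dim, 256, 4, 1, 0), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, attrs):
        # Concatenate noise and attribute vector, then treat it as a 1x1 spatial map.
        x = torch.cat([z, attrs], dim=1)[:, :, None, None]
        return self.net(x)

def perceptual_attribute_loss(attr_predictor, fake_images, target_attrs):
    # Auxiliary regression loss from a pretrained attribute-predicting CNN
    # (hypothetical attr_predictor), added to the usual adversarial loss.
    predicted = attr_predictor(fake_images)
    return nn.functional.mse_loss(predicted, target_attrs)
```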

    Predicting and generating wallpaper texture with semantic properties


    Arbitrary-scale texture generation from coarse-grained control

    Existing deep-network-based texture synthesis approaches all focus on fine-grained control of texture generation by synthesizing images from exemplars. Since the networks employed by most of these methods are tied to individual exemplar textures, a large number of individual networks have to be trained to model various textures. In this paper, we propose to generate textures directly from coarse-grained control or high-level guidance, such as texture categories, perceptual attributes and semantic descriptions. We fulfill the task by parsing the generation process of a texture into a three-level Bayesian hierarchical model: a coarse-grained signal first determines a distribution over Markov random fields; a Markov random field then models the distribution of the final output textures; finally, an output texture is generated from the sampled Markov random field distribution. At the bottom level of the Bayesian hierarchy, the isotropic and ergodic characteristics of textures favor a construction that consists of a fully convolutional network. The proposed method integrates texture creation and texture synthesis into one pipeline for real-time texture generation, and enables users to readily obtain diverse textures at arbitrary scales from high-level guidance alone. Extensive experiments demonstrate that the proposed method is capable of generating plausible textures that are faithful to user-defined control, and of achieving impressive texture metamorphosis by interpolation in the learned texture manifold.
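    A hedged sketch of the bottom level only, assuming a fully convolutional PyTorch generator that maps a spatial noise field plus a broadcast coarse-grained control code to a texture; the class name, channel counts and control_dim are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CoarseControlledTextureFCN(nn.Module):
    """Fully convolutional generator: a spatial noise field plus a broadcast
    coarse-grained control code yields a texture of the same spatial size,
    so arbitrary output scales come for free."""
    def __init__(self, noise_channels=16, control_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_channels + control_dim, 64, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, noise, control):
        # control: (B, control_dim) category / attribute / description embedding,
        # broadcast over the spatial grid of the noise field.
        b, _, h, w = noise.shape
        control_map = control[:, :, None, None].expand(b, control.size(1), h, w)
        return self.net(torch.cat([noise, control_map], dim=1))

# Any spatial size works because the network is fully convolutional.
gen = CoarseControlledTextureFCN()
noise = torch.randn(1, 16, 384, 512)   # arbitrary-scale noise field
control = torch.randn(1, 8)            # stand-in for a learned control code
texture = gen(noise, control)          # -> (1, 3, 384, 512)
```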

    Perceptual texture similarity learning using deep neural networks


    Attention Fusion of Transformer-Based and Scale-Based Method for Hyperspectral and LiDAR Joint Classification

    In recent years, many multimodal works in the field of remote sensing have achieved good results in the task of land-cover classification. However, multi-scale information is seldom considered in the multimodal fusion process. In addition, multimodal fusion rarely exploits attention mechanisms, resulting in a weak representation of the fused features. In order to better use multimodal data and reduce the losses caused by fusing different modalities, we propose TRMSF (Transformer and Multi-Scale Fusion), a network for joint land-cover classification from HSI (hyperspectral images) and LiDAR (Light Detection and Ranging) data. The network strengthens multimodal information fusion through a Transformer-based attention mechanism and through enhancement with multi-scale information when fusing features from different modal structures. The network consists of three parts: a multi-scale attention enhancement module (MSAE), a multimodality fusion module (MMF) and a multi-output module (MOM). MSAE enhances feature representation by extracting multi-scale features from the HSI, each of which is fused with the LiDAR features. MMF integrates the data of different modalities through an attention mechanism, thereby reducing the loss caused by fusing data with different modal structures. MOM optimizes the network by controlling the different outputs and enhances the stability of the results. Experimental results show that the proposed network is effective for multimodal joint classification.
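    A small PyTorch sketch of the cross-modal attention fusion idea, assuming per-modality backbones have already produced feature maps of equal shape; AttentionFusion and its dimensions are hypothetical stand-ins for the fusion module described here, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuses HSI and LiDAR feature maps with multi-head cross-attention:
    HSI tokens act as queries, LiDAR tokens as keys/values."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hsi_feat, lidar_feat):
        # hsi_feat, lidar_feat: (B, C, H, W) -> token sequences (B, H*W, C)
        b, c, h, w = hsi_feat.shape
        q = hsi_feat.flatten(2).transpose(1, 2)
        kv = lidar_feat.flatten(2).transpose(1, 2)
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(fused + q)  # residual connection around the attention
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Usage with hypothetical per-modality backbones producing 64-channel maps.
hsi = torch.randn(2, 64, 16, 16)
lidar = torch.randn(2, 64, 16, 16)
out = AttentionFusion()(hsi, lidar)   # -> (2, 64, 16, 16)
```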

    A Perception-Inspired Deep Learning Framework for Predicting Perceptual Texture Similarity

    Similarity learning plays a fundamental role in the fields of multimedia retrieval and pattern recognition. Predicting perceptual similarity is a challenging task, as in most cases we lack human-labelled ground-truth data and robust models that mimic human visual perception. Although some studies in the literature have been dedicated to similarity learning, they mainly focus on evaluating whether or not two images are similar, rather than on predicting a perceptual similarity that is consistent with human perception. Inspired by the human visual perception mechanism, we propose a novel framework for predicting the perceptual similarity between two texture images. The proposed framework is built on top of Convolutional Neural Networks (CNNs) and considers both powerful features and perceptual characteristics of contours extracted from the images. The similarity value is computed by aggregating resemblances between the corresponding convolutional layer activations of the two texture maps. Experimental results show that the predicted similarity values are consistent with human-perceived similarity data.
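    A hedged sketch of the activation-aggregation step only, assuming a VGG16 backbone and cosine similarity as the per-layer resemblance measure; the contour branch is omitted, and the chosen layers are an assumption rather than the authors' configuration.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def perceptual_texture_similarity(img_a, img_b, layers=(3, 8, 15, 22)):
    """Similarity score from aggregated resemblances between corresponding
    convolutional activations of two images (normalization omitted)."""
    # weights=None keeps the sketch self-contained; load pretrained weights in practice.
    vgg = models.vgg16(weights=None).features.eval()
    sims = []
    x, y = img_a, img_b
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x, y = layer(x), layer(y)
            if i in layers:
                # Cosine similarity between flattened activations of this layer.
                sims.append(F.cosine_similarity(x.flatten(1), y.flatten(1), dim=1))
    return torch.stack(sims).mean(dim=0)  # aggregate across layers

# Usage with two 224x224 RGB tensors.
a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
score = perceptual_texture_similarity(a, b)
```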

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate on a regular basis updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.